Audio-Visual Depth and Material Estimation for Robot Navigation

Justin Wilson1 and Nicholas Rewkowski2 and Ming C. Lin2

1University of North Carolina at Chapel Hill, 2University of Maryland, College Park




ABSTRACT

Reflective and textureless surfaces such as windows, mirrors, and walls can be a challenge for scene reconstruction due to depth discontinuities and holes. We propose an audio-visual method that uses the reflections of sound to aid depth estimation and material classification for 3D scene reconstruction in robot navigation and AR/VR applications. Our mobile phone prototype emits pulsed audio while recording video, providing the inputs for audio-visual classification and 3D scene reconstruction. Reflected sound and images from the video are input into our audio (EchoCNN-A) and audio-visual (EchoCNN-AV) convolutional neural networks for surface and sound source detection, depth estimation, and material classification. The inferences from these classifications enhance 3D scene reconstructions containing open spaces and reflective surfaces by depth filtering, inpainting, and placement of unmixed sound sources in the scene. Our prototype, demos, and experimental results on real-world scenes with challenging surfaces and sounds, also validated with virtual scenes, indicate high success rates in classifying material, depth, and closed/open surfaces, leading to considerable improvement in 3D scene reconstruction for robot navigation.
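The abstract names two networks, EchoCNN-A and EchoCNN-AV, without giving their layer configurations here. Below is a minimal, hypothetical PyTorch sketch of a two-branch audio-visual model in that spirit: an audio branch over a spectrogram of the recorded echo and a visual branch over a video frame, fused into three heads for the tasks listed above (open/closed surface detection, discretized depth estimation, and material classification). The class name AudioVisualEchoNet, all layer sizes, and the bin/material counts are illustrative assumptions, not the published architecture.

```python
# Hypothetical two-branch audio-visual classifier in the spirit of
# EchoCNN-AV; layer sizes and head definitions are illustrative only.
import torch
import torch.nn as nn

class AudioVisualEchoNet(nn.Module):
    def __init__(self, n_depth_bins=5, n_materials=4):
        super().__init__()
        # Audio branch: 2D convs over a (1, freq, time) echo spectrogram.
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),  # -> 32 * 4 * 4 = 512 features
        )
        # Visual branch: 2D convs over an RGB video frame.
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        fused = 512 + 512
        # Three task heads, matching the tasks named in the abstract.
        self.surface_head = nn.Linear(fused, 2)             # open vs. closed surface
        self.depth_head = nn.Linear(fused, n_depth_bins)    # discretized depth
        self.material_head = nn.Linear(fused, n_materials)  # e.g., glass, drywall, ...

    def forward(self, spectrogram, frame):
        # Late fusion: concatenate audio and visual features, then classify.
        z = torch.cat([self.audio(spectrogram), self.visual(frame)], dim=1)
        return self.surface_head(z), self.depth_head(z), self.material_head(z)

# Example: one echo spectrogram (1x128x128) and one RGB frame (3x128x128).
net = AudioVisualEchoNet()
surface, depth, material = net(torch.randn(1, 1, 128, 128),
                               torch.randn(1, 3, 128, 128))
print(surface.shape, depth.shape, material.shape)
```

Treating depth as a small set of bins turns ranging into a classification problem, which is consistent with the abstract's framing of depth estimation alongside material and surface classification; the actual discretization used by the paper is not specified here.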



DEMO VIDEO

Video


DATASETS

Dataset download